
    Decorrelation control by the cerebellum achieves oculomotor plant compensation in simulated vestibulo-ocular reflex

    We introduce decorrelation control as a candidate algorithm for the cerebellar microcircuit and demonstrate its utility for oculomotor plant compensation in a linear model of the vestibulo-ocular reflex (VOR). Using an adaptive-filter representation of cerebellar cortex and an anti-Hebbian learning rule, the algorithm learnt to compensate for the oculomotor plant by minimizing correlations between a predictor variable (eye-movement command) and a target variable (retinal slip), without requiring a motor-error signal. Because it also provides an estimate of the unpredicted component of the target variable, decorrelation control can simplify both motor coordination and sensory acquisition. It thus unifies motor and sensory cerebellar functions.
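
    A minimal sketch of the decorrelation-control idea, assuming a tapped-delay-line adaptive filter fed by an efference copy of the eye-movement command (the predictor) and an anti-Hebbian update driven by retinal slip (the target); the first-order plant, sign conventions, and all parameter values below are our own illustrative choices, not those of the paper's model:

        import numpy as np

        rng = np.random.default_rng(0)
        n_taps, beta, n_steps = 20, 1e-4, 100_000
        w = np.zeros(n_taps)                 # cerebellar (adaptive filter) weights
        buf = np.zeros(n_taps)               # delayed copies of the motor command (predictor)
        eye_vel, plant_a = 0.0, 0.9          # crude first-order oculomotor plant

        for t in range(n_steps):
            head_vel = rng.standard_normal()             # vestibular input
            motor_cmd = -head_vel + w @ buf              # brainstem drive + cerebellar correction
            eye_vel = plant_a * eye_vel + (1 - plant_a) * motor_cmd
            retinal_slip = head_vel + eye_vel            # target: residual image motion
            w -= beta * retinal_slip * buf               # anti-Hebbian decorrelation update
            buf = np.roll(buf, 1)
            buf[0] = motor_cmd                           # efference copy enters the delay line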

    Adaptive cancelation of self-generated sensory signals in a whisking robot

    Sensory signals are often caused by one's own active movements. This raises the problem of discriminating between self-generated sensory signals and those generated by the external world. Such discrimination is of general importance for robotic systems, where operational robustness depends on the correct interpretation of sensory signals. Here, we investigate this problem in the context of a whiskered robot. The whisker sensory signal comprises two components: one due to contact with an object (externally generated) and another due to active movement of the whisker (self-generated). We propose a solution to this discrimination problem based on adaptive noise cancelation, where the robot learns to predict the sensory consequences of its own movements using an adaptive filter. The filter inputs (a copy of the motor commands) are transformed by Laguerre functions instead of the often-used tapped-delay line, which reduces model order and, therefore, computational complexity. Results from a contact-detection task demonstrate that false positives are significantly reduced using the proposed scheme.
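
    A sketch of the adaptive-cancelation scheme under stated assumptions: the motor-command copy is expanded by a small Laguerre filter bank (pole a), and an LMS rule adapts the weights so the predicted self-generated component can be subtracted from the whisker signal; the signal shapes, contact event, and parameters are illustrative rather than taken from the robot experiments:

        import numpy as np

        def laguerre_step(state, u, a):
            """Advance a discrete Laguerre filter bank (pole a) by one sample."""
            new = np.empty_like(state)
            new[0] = a * state[0] + np.sqrt(1.0 - a * a) * u
            for k in range(1, len(state)):
                new[k] = a * state[k] + state[k - 1] - a * new[k - 1]
            return new

        rng = np.random.default_rng(1)
        n_basis, a, mu = 5, 0.8, 0.01
        w = np.zeros(n_basis)                # adaptive weights on the Laguerre outputs
        state = np.zeros(n_basis)

        for t in range(20_000):
            motor_cmd = np.sin(2 * np.pi * 0.01 * t)                 # whisking drive (efference copy)
            reafference = 0.6 * np.sin(2 * np.pi * 0.01 * (t - 5))   # self-generated component
            contact = 1.0 if 9_000 < t < 9_100 else 0.0              # brief external contact
            sensor = reafference + contact + 0.01 * rng.standard_normal()

            state = laguerre_step(state, motor_cmd, a)
            residual = sensor - w @ state    # cancelled signal: mostly contact after learning
            w += mu * residual * state       # LMS update of the predictor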

    World statistics drive learning of cerebellar internal models in adaptive feedback control: a case study using the optokinetic reflex

    The cerebellum is widely thought to play an important role in adaptive motor control. Many of the computational studies on cerebellar motor control to date have focused on the associated architecture and learning algorithms in an effort to further understand cerebellar function. In this paper we switch focus to the signals driving cerebellar adaptation that arise through different motor behaviors. To do this, we investigate computationally the contribution of the cerebellum to the optokinetic reflex (OKR), a visual feedback control scheme for image stabilization. We develop a computational model of the adaptation of the cerebellar response to the world velocity signals that excite the OKR (where world velocity signals are used to emulate head velocity signals when studying the OKR in head-fixed experimental laboratory conditions). The results show that the filter learnt by the cerebellar model is highly dependent on the power spectrum of the colored-noise world velocity excitation signal. Thus, the key finding here is that the cerebellar filter is determined by the statistics of the OKR excitation signal.
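
    The following toy illustration (our own, not the paper's OKR model) shows the underlying point that an LMS-trained filter is fixed by the statistics of its excitation: a one-step linear predictor trained on two differently colored versions of the same white noise converges to different weights:

        import numpy as np

        def ar1(noise, a):
            """Color white noise with a first-order autoregressive filter."""
            x = np.zeros_like(noise)
            for t in range(1, len(noise)):
                x[t] = a * x[t - 1] + noise[t]
            return x

        def lms_predictor(signal, n_taps=8, mu=1e-3):
            """Train a one-step-ahead LMS predictor and return its weights."""
            w, buf = np.zeros(n_taps), np.zeros(n_taps)
            for sample in signal:
                err = sample - w @ buf       # prediction error ("slip"-like signal)
                w += mu * err * buf
                buf = np.roll(buf, 1)
                buf[0] = sample
            return w

        rng = np.random.default_rng(2)
        white = rng.standard_normal(100_000)
        slow = ar1(white, 0.95)              # low-frequency-dominated excitation
        fast = ar1(white, 0.30)              # broader-band excitation
        print(lms_predictor(slow)[:3])       # the learned filters differ because
        print(lms_predictor(fast)[:3])       # the excitation statistics differ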

    Cerebellar Motor Learning: When Is Cortical Plasticity Not Enough?

    Classical Marr-Albus theories of cerebellar learning employ only cortical sites of plasticity. However, tests of these theories using adaptive calibration of the vestibulo-ocular reflex (VOR) have indicated plasticity in both cerebellar cortex and the brainstem. To resolve this long-standing conflict, we attempted to identify the computational role of the brainstem site by using an adaptive-filter version of the cerebellar microcircuit to model VOR calibration for changes in the oculomotor plant. With only cortical plasticity, introducing a realistic 100 ms delay in the retinal-slip error signal prevented learning at frequencies higher than 2.5 Hz, although the VOR itself is accurate up to at least 25 Hz. However, the introduction of an additional brainstem site of plasticity, driven by the correlation between cerebellar and vestibular inputs, overcame the 2.5 Hz limitation and allowed learning of accurate high-frequency gains. This “cortex-first” learning mechanism is consistent with a wide variety of evidence concerning the role of the flocculus in VOR calibration, and complements rather than replaces the previously proposed “brainstem-first” mechanism that operates when ocular tracking mechanisms are effective. These results (i) describe a process whereby information originally learnt in one area of the brain (cerebellar cortex) can be transferred and expressed in another (brainstem), and (ii) indicate for the first time why a brainstem site of plasticity is actually required by Marr-Albus-type models when high-frequency gains must be learned in the presence of error delay.
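
    A back-of-envelope reading of the 2.5 Hz limit (our own, not taken from the paper): with a 100 ms error delay, a gradient-style update on a sinusoidal component of frequency f is rotated by the delay's phase lag and retains a corrective component only while that lag stays below 90 degrees, which holds up to f = 1 / (4 × 0.1 s) = 2.5 Hz:

        import numpy as np

        delay = 0.1                          # retinal-slip error delay in seconds
        f_limit = 1.0 / (4.0 * delay)        # 90-degree point: 2.5 Hz
        for f in (1.0, 2.0, 3.0, 10.0):
            lag_deg = np.degrees(2 * np.pi * f * delay)
            ok = "<" if f < f_limit else ">"
            print(f"{f:4.1f} Hz: error phase lag {lag_deg:6.1f} deg ({ok} 90 deg)")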

    Sensorimotor maps can be dynamically calibrated using an adaptive-filter model of the cerebellum

    Substantial experimental evidence suggests the cerebellum is involved in calibrating sensorimotor maps. Consistent with this involvement is the well-known, but little understood, massive cerebellar projection to maps in the superior colliculus. Map calibration would be a significant new role for the cerebellum given the ubiquity of map representations in the brain, but how it could perform such a task is unclear. Here we investigated a dynamic method for map calibration, based on electrophysiological recordings from the superior colliculus, that used a standard adaptive-filter cerebellar model. The method proved effective for complex distortions of both unimodal and bimodal maps, and also for predictive map-based tracking of moving targets. These results provide the first computational evidence for a novel role for the cerebellum in dynamic sensorimotor map calibration, of potential importance for coordinate alignment during ongoing motor control, and for map calibration in future biomimetic systems. This computational evidence also provides testable experimental predictions concerning the role of the connections between cerebellum and superior colliculus in previously observed dynamic coordinate transformations.
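
    One way to picture the calibration idea is the following hypothetical one-dimensional sketch (not the paper's collicular model): a distorted map reading is corrected by a weighted sum of Gaussian basis functions whose weights are trained by LMS on the visual error between the true and decoded target positions:

        import numpy as np

        rng = np.random.default_rng(3)
        centres = np.linspace(-1.0, 1.0, 15)     # Gaussian basis centres over the map
        width, mu = 0.15, 0.05
        w = np.zeros_like(centres)

        def basis(m):
            return np.exp(-0.5 * ((m - centres) / width) ** 2)

        for _ in range(20_000):
            x = rng.uniform(-1.0, 1.0)           # true target position
            m = x + 0.3 * np.sin(3.0 * x)        # distorted map reading
            phi = basis(m)
            decoded = m + w @ phi                # corrected estimate of the position
            err = x - decoded                    # visual error feedback
            w += mu * err * phi                  # LMS update of the correction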

    Cerebellum-based Adaptation for Fine Haptic Control over the Space of Uncertain Surfaces

    This work aims to augment the iCub robot's capacity for haptic perception by developing a controller for surface exploration. The main task involves moving the hand over an irregular surface of uncertain slope while concurrently regulating the contact pressure. Providing this ability will enable the autonomous extraction of important haptic features, such as texture and shape. We propose a hand controller whose operational space is defined over the surface of contact. The surface is estimated using a robust probabilistic estimator, which is then used for path planning. The motor commands are generated using a feedback controller, taking advantage of the kinematic information available through proprioception. Finally, the effectiveness of this controller is extended using a cerebellar-like adapter that provides reliable pressure tracking at the finger and yields a trajectory that is less vulnerable to perturbations. The results of this work are consistent with insights about the role of the cerebellum in haptic perception in humans.
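
    A common way to realize such a cerebellar-like adapter is feedback-error learning, sketched below under our own assumptions (first-order pressure dynamics, proportional feedback, a tapped history of the set-point as adapter input); this illustrates the general idea, not the authors' iCub implementation:

        import numpy as np

        rng = np.random.default_rng(4)
        kp, mu, n_taps = 2.0, 0.01, 10
        w = np.zeros(n_taps)                 # adapter (feedforward) weights
        buf = np.zeros(n_taps)               # recent set-point history (adapter input)
        pressure = 0.0

        for t in range(20_000):
            desired = 1.0 + 0.2 * np.sin(2 * np.pi * 0.005 * t)   # pressure set-point
            buf = np.roll(buf, 1)
            buf[0] = desired
            fb = kp * (desired - pressure)   # feedback command
            u = fb + w @ buf                 # feedback plus adaptive feedforward
            pressure = 0.9 * pressure + 0.1 * u + 0.01 * rng.standard_normal()
            w += mu * fb * buf               # feedback-error learning update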

    At the Edge of Chaos: How Cerebellar Granular Layer Network Dynamics Can Provide the Basis for Temporal Filters.

    Models of the cerebellar microcircuit often assume that input signals from the mossy fibers are expanded and recoded to provide a foundation from which the Purkinje cells can synthesize output filters to implement specific input-signal transformations. Details of this process are, however, unclear. While previous work has shown that recurrent granule cell inhibition could in principle generate a wide variety of random outputs suitable for coding signal onsets, the more general application to temporally varying signals has yet to be demonstrated. Here we show for the first time that using a mechanism very similar to reservoir computing enables random neuronal networks in the granule cell layer to provide the necessary signal separation and extension from which Purkinje cells could construct basis filters of various time constants. The main requirement for this is that the network operates in a state of criticality close to the edge of random chaotic behavior. We further show that the granular layer's lack of the recurrent excitation commonly required in traditional reservoir networks can be circumvented by considering other inherent granular-layer features, such as inverted input signals or mGluR2 inhibition of Golgi cells. Other properties that facilitate filter construction are direct mossy fiber excitation of Golgi cells, variability of synaptic weights or input signals, and output feedback via the nucleocortical pathway. Our findings are well supported by previous experimental and theoretical work and will help to bridge the gap between system-level models and detailed models of the granular layer network.
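
    A generic echo-state-network sketch of the reservoir idea (a standard ESN with excitatory recurrence, so deliberately not the granular-layer circuit described above): a random recurrent network with spectral radius just below 1 expands an input signal in time, and a linear Purkinje-cell-like readout is fit to reproduce a target temporal filter:

        import numpy as np

        rng = np.random.default_rng(5)
        n_res, n_steps = 200, 5_000
        W = rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)
        W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius just below 1
        w_in = rng.standard_normal(n_res)

        u = rng.standard_normal(n_steps)                   # mossy-fiber-like input
        x = np.zeros(n_res)
        states = np.zeros((n_steps, n_res))
        for t in range(n_steps):
            x = np.tanh(W @ x + w_in * u[t])               # reservoir state update
            states[t] = x

        target = np.zeros(n_steps)                         # one possible "basis filter":
        for t in range(1, n_steps):                        # a leaky integration of the input
            target[t] = 0.98 * target[t - 1] + 0.02 * u[t]

        w_out, *_ = np.linalg.lstsq(states[100:], target[100:], rcond=None)
        print("readout fit error:", np.mean((states[100:] @ w_out - target[100:]) ** 2))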

    Visual-tactile sensory map calibration of a biomimetic whiskered robot

    We present an adaptive filter model of cerebellar function applied to the calibration of a tactile sensory map to improve the accuracy of directed movements of a robotic manipulator. This is demonstrated using a platform called Bellabot that incorporates an array of biomimetic tactile whiskers, actuated using electro-active polymer artificial muscles, a camera to provide visual error feedback, and a standard industrial robotic manipulator. The algorithm learns to accommodate imperfections in the sensory map that may arise as a result of poor manufacturing tolerances or damage to the sensory array. Such an ability is an important prerequisite for robust tactile robotic systems operating in the real world for extended periods of time. In this work the sensory maps have been purposely distorted in order to evaluate the performance of the algorithm.

    When is now? Perception of simultaneity

    We address the following question: Is there a difference (D) between the amount of time for auditory and visual stimuli to be perceived? On each of 1000 trials, observers were presented with a light-sound pair, separated by a stimulus onset asynchrony (SOA) between -250 ms (sound first) and 250 ms. Observers indicated whether the light-sound pair came on simultaneously by pressing one of two (yes or no) keys. The SOA most likely to yield affirmative responses was defined as the point of subjective simultaneity (PSS). PSS values were between -21 ms (i.e. sound 21 ms before light) and 150 ms. Evidence is presented that each PSS is observer specific. In a second experiment, each observer was tested using two observer-stimulus distances. The resultant PSS values are highly correlated (r = 0.954, p = 0.003), suggesting that each observer's PSS is stable. PSS values were significantly affected by observer-stimulus distance, suggesting that observers do not take account of the effect of changes in distance on the resultant difference in arrival times of light and sound. The difference RTd in simple reaction time to single visual and auditory stimuli was also estimated; no evidence that RTd is observer specific or stable was found. The implications of these findings for the perception of multisensory stimuli are discussed.
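
    As an illustration of how a PSS can be extracted from such data (our own sketch, not the authors' analysis), one can fit a Gaussian to the proportion of "simultaneous" responses at each SOA and take its peak; the simulated observer below has an arbitrary true PSS of 40 ms:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)
        soas = np.arange(-250, 251, 25)                    # SOA in ms, sound-first negative
        true_pss, spread = 40.0, 80.0                      # arbitrary simulated observer
        p_yes = np.exp(-0.5 * ((soas - true_pss) / spread) ** 2)
        prop = rng.binomial(40, p_yes) / 40.0              # 40 simulated trials per SOA

        def gauss(soa, amp, pss, sigma):
            return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

        (amp, pss, sigma), _ = curve_fit(gauss, soas, prop, p0=[1.0, 0.0, 100.0])
        print(f"estimated PSS: {pss:.1f} ms")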